run pytest against nightly #238
Conversation
Might be good to file an issue with pyarrow to handle pandas nightly
I guess it's only an issue on older Pythons...
Looking at this, I don't see where you install the nightly build, but maybe I'm missing something or you're still working on that part?
This installs (and uninstalls) the nightly: pandas-stubs/scripts/test/__init__.py Line 52 in cc6ea9b
The issue is that I don't yet know how to make GitHub ignore errors.
Add
I think it was here; I didn't remember that it was in the code: pandas-stubs/scripts/test/run.py Line 83 in 80f561f
Thank you! I simplified the install/uninstall. Why is
It has to do with how pip checks versions. The legacy resolver takes the latest. Without it, pip downloads every wheel file to get its metadata before deciding on the best fit.
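For context, a minimal sketch of what such an install/uninstall step could look like. This is not the project's actual script; the nightly index URL, function names, and exact flags are assumptions for illustration.

```python
import subprocess
import sys

# Assumed nightly wheel index; the URL the project actually uses may differ.
NIGHTLY_INDEX = "https://pypi.anaconda.org/scipy-wheels-nightly/simple"


def install_pandas_nightly() -> None:
    # --pre allows dev releases; --upgrade replaces the pinned release.
    # As discussed above, the legacy resolver simply takes the latest version
    # instead of downloading every candidate wheel to inspect its metadata.
    subprocess.run(
        [
            sys.executable, "-m", "pip", "install", "--pre", "--upgrade",
            "--use-deprecated=legacy-resolver",
            "--extra-index-url", NIGHTLY_INDEX,
            "pandas",
        ],
        check=True,
    )


def uninstall_pandas_nightly() -> None:
    # Remove the dev build so the pinned pandas release can be restored.
    subprocess.run(
        [sys.executable, "-m", "pip", "uninstall", "-y", "pandas"],
        check=True,
    )
```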
I'm surprised we get these warnings:
I thought that change had already been reverted on main
Reopen to trigger CI |
So while we don't exit on failure when the test against the nightly build fails, would a failure still show up red in the CI report?
It will always be "green". As far as I know there is no "yellow".
There might be a way to mark the CI run as skipped if it fails. Then we would have a visual indicator. |
Without any visual indicator, I'm not sure how we would ever notice that something was revealed via this test. Can you investigate that? |
I didn't find a way to visually indicate an error except by failing (so it fails now). I also made pytest error on any warnings. This currently fails on nightly, but #251 might fix that.
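As an illustration of the warnings-as-errors idea (a sketch under assumptions, not the exact change in this PR; the `tests` path is illustrative), pytest can promote every warning to an error via its `-W` option:

```python
import subprocess
import sys

# Sketch: run the suite with all warnings promoted to errors, so a
# FutureWarning or DeprecationWarning raised by pandas nightly fails the run.
result = subprocess.run(
    [sys.executable, "-m", "pytest", "-W", "error", "tests"],
    check=False,
)
sys.exit(result.returncode)
```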
So I like what the PR is doing, but it is raising some questions in how we want to manage the project. Opinions wanted!
We could also make pytest-nightly run in its own workflow; then it is clear that ubuntu+3.10 did not fail for any other reason.
If pandas nightly causes all new PRs to have errors, I would not expect people to fix that as part of their PR. But if people want to add new annotations that trigger future/deprecation warnings, I would be inclined to request changes :)
In either case, it probably makes sense to have a separate workflow just for nightly.
I really like the current simplicity of pandas-stubs. If there are cases where we do not (yet) want to be in line with nightly, we could simply have a pytest skip decorator based on the pandas version (see the sketch below).
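For example, a version-based skip could look roughly like this; the test name and the exact condition are illustrative, and `packaging` is assumed to be available:

```python
import pandas as pd
import pytest
from packaging.version import Version


# Illustrative test: skip it only when running against a pandas dev build,
# keeping the regular CI runs green while nightly catches up.
@pytest.mark.skipif(
    Version(pd.__version__).is_devrelease,
    reason="not yet aligned with pandas nightly",
)
def test_behaviour_pending_on_nightly() -> None:
    ...
```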
So let's do that, and we'll operate on the principle that if the nightly fails, and everything else passes, then we'll accept the PR.
Me too!!!
Good idea
I will mark this PR as a draft and then figure out how to share steps between the general checks and the nightly pytest run.
Might be worth waiting for #251 to keep the nice green check mark :) |
This reverts commit fc51e9b.
Can you make test/nightly orange for failure rather than red?
I did not find an option for that.
statsmodels has it on their pip pre-release run when pytest fails. If you merge this, I'll see if I can figure out what is needed in a later PR.
thanks @twoertwein
I think there is a way to mark a run as "allowed to fail". At least I remember seeing it somewhere. Hopefully @bashtage can figure it out!
If it is not possible to make it warn-only rather than fail, an option could be to run it only on main (after PRs are merged) or the reverse (only for PRs but not on main): then we would have some more green :)
Looks pretty hopeless for now. It seems GHA doesn't have a yellow/warn mode despite the similarities to Azure (I suspect that it runs on Azure).
I would always pass the run and have it there in CI so that interested parties could easily see what is going on. The alternative would be to add one Azure run, which could be configured to be yellow. If you want the Azure run, I can set it up.
assert_type() to assert the type of any return value
Not sure whether it will succeed if pytest fails.